Algorithm Based Fault Tolerance in Massively Parallel Systems
A complex computer system consists of billions of transistors, miles of wires, and many interactions with an unpredictable environment. Correct results must be produced despite faults that dynamically occur in some of these components. Many techniques have been developed for fault-tolerant computation. General-purpose methods are independent of the application, yet incur an overhead cost that may be unacceptable for massively parallel systems. Algorithm-specific methods, which can operate at lower cost, are a developing alternative [1, 72]. This paper first reviews the general-purpose approach and then focuses on the algorithm-specific method, with an eye toward massively parallel processors. Algorithm-based fault tolerance has the attraction of low overhead; furthermore, it addresses both the detection and the correction problems. The principle is to build low-cost checking and correcting mechanisms based exclusively on the redundancies inherent in the system.
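The classic instance of this principle is checksum-encoded matrix multiplication: augment the operands with row/column checksums, and any single faulty element of the product betrays itself by the row and column whose checksums disagree. The sketch below illustrates the idea only; it assumes at most one fault, and all names are illustrative rather than taken from the paper.

```python
# Sketch of algorithm-based fault tolerance (ABFT) for matrix
# multiplication via checksum encoding. Assumes at most one transient
# fault corrupts the product; names are illustrative.

def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def encode(A, B):
    """Augment A with a column-checksum row and B with a row-checksum column."""
    Ac = A + [[sum(col) for col in zip(*A)]]
    Br = [row + [sum(row)] for row in B]
    return Ac, Br

def check_and_correct(C):
    """Locate and repair a single faulty element using the checksums."""
    body = [row[:-1] for row in C[:-1]]
    row_err = [C[i][-1] - sum(body[i]) for i in range(len(body))]
    col_err = [C[-1][j] - sum(r[j] for r in body) for j in range(len(body[0]))]
    bad_rows = [i for i, e in enumerate(row_err) if e != 0]
    bad_cols = [j for j, e in enumerate(col_err) if e != 0]
    if len(bad_rows) == 1 and len(bad_cols) == 1:   # single fault: correct it
        i, j = bad_rows[0], bad_cols[0]
        body[i][j] += row_err[i]
    return body
```

Because the checksums ride along through the multiplication itself, the redundancy costs one extra row and column rather than a full duplicate computation, which is the low-overhead property the abstract emphasizes.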
Energy-Based Segmentation of Very Sparse Range Surfaces
This paper describes a new segmentation technique for very sparse surfaces, based on minimizing the energy of the surfaces in the scene. While it could be used in almost any system as part of surface reconstruction or model recovery, the algorithm is designed to be usable when the depth information is scattered and very sparse, as is generally the case with depth generated by stereo algorithms. We show results from a sequential algorithm that processes laser range-finder data or synthetic data, and then discuss a parallel implementation running on the Connection Machine. The idea of segmentation by energy minimization is not new; however, prior techniques have relied on discrete regularization or Markov random fields to model the surfaces, building smooth surfaces and detecting depth edges. Both of these techniques are ineffective at energy minimization for very sparse data. In addition, our method does not require edge detection and is thus also applicable when edge information is unreliable or unavailable. The model is extremely general: it does not depend on a model of the surface shape, but only on the energy required to bend a surface, so the surfaces can grow in a more data-directed manner. The technique models the surfaces with reproducing-kernel splines, which can be shown to solve a regularized surface reconstruction problem. From the functional form of these splines we derive computable bounds on the energy of a surface over a given finite region; the computation of the spline and the corresponding surface representation is quite efficient for very sparse data. An interesting property of the algorithm is that it makes no attempt to determine segmentation boundaries; rather, it can be viewed as a classification scheme that segments the data into collections of points which are "from" the same surface. Among the significant advantages of the method is the capacity to process overlapping transparent surfaces, as well as surfaces with large occluded areas.
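The classification-by-energy idea can be illustrated with a deliberately crude 1-D toy: assign each sparse sample to whichever existing surface its addition bends the least, starting a new surface when every candidate would bend too much. The energy here is a squared-second-difference proxy, not the reproducing-kernel spline bound derived in the paper, and all names and thresholds are invented for illustration.

```python
# Toy 1-D illustration of segmentation by energy minimization for
# sparse data: greedy assignment of points to the surface whose
# bending energy grows least. The energy is a crude second-difference
# proxy, NOT the paper's spline-based bound; thresholds are invented.

def bending_energy(points):
    """Approximate bending energy of a 1-D profile by squared second differences."""
    pts = sorted(points)
    e = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(pts, pts[1:], pts[2:]):
        h = (x2 - x0) / 2 or 1e-9          # guard against coincident abscissae
        e += ((y2 - 2 * y1 + y0) / h ** 2) ** 2
    return e

def segment(points, max_increase=1.0):
    """Start a new surface whenever every existing one would bend too much."""
    surfaces = []
    for p in points:
        best, best_cost = None, max_increase
        for s in surfaces:
            cost = bending_energy(s + [p]) - bending_energy(s)
            if cost < best_cost:
                best, best_cost = s, cost
        if best is None:
            surfaces.append([p])
        else:
            best.append(p)
    return surfaces
```

Note that, as in the paper, nothing here computes a segmentation boundary; points are simply classified into collections that plausibly come "from" the same surface, which is why overlapping surfaces pose no special problem for the scheme.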
||PSL: A Parallel Lisp for the DADO Machine
We describe a system-level programming language and integrated environment for program development on the DADO parallel computer. In addition, a set of language constructs augmenting LISP for programming parallel computation on a tree-structured parallel machine is defined. We discuss the architecture of the DADO machine and present several examples to illustrate the language. In particular, we describe how the language provides an integrated approach to the problem of parallel software design: parallel algorithms may be designed and analyzed on a sequential machine under simulation, and then simply recompiled to run on a parallel machine. In concluding sections we outline the implementation using the Portable Standard LISP compiler.
An Overview of the DADO Parallel Computer
DADO is a special-purpose parallel computer designed for the rapid execution of artificial intelligence expert systems. This article discusses the DADO hardware and software systems, with emphasis on the question of granularity. DADO is designed as a fine-grain machine constructed from many thousands of processing elements (PEs) interconnected in a complete binary tree. Two prototype systems, DADO1 and DADO2, are detailed. Each PE of these prototypes consists of a commercially available microprocessor chip, memory chips, and an additional semicustom I/O processor designed at Columbia University. The software includes a kernel and parallel languages. Under development are several artificial intelligence systems, including a production system interpreter, a logic programming language, and an expert system building tool.
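The characteristic operation on such a tree of PEs is a broadcast down from the root followed by a resolve/report back up, which suits production-system matching: every PE tests the query against its private memory in parallel. A minimal sequential model of that pattern, with invented names and data, might look like this:

```python
# Minimal model of DADO-style tree computation: the root broadcasts a
# query down a complete binary tree of processing elements (PEs), each
# PE matches it against local memory, and hits are combined on the way
# back up. Names and data are illustrative, not from the article.

class PE:
    def __init__(self, local_data, left=None, right=None):
        self.local_data = local_data          # this PE's private memory
        self.left, self.right = left, right   # children in the binary tree

    def broadcast_and_resolve(self, predicate):
        """Send the predicate down; collect matching items back toward the root."""
        hits = [x for x in self.local_data if predicate(x)]
        for child in (self.left, self.right):
            if child is not None:
                hits += child.broadcast_and_resolve(predicate)
        return hits

# A 3-level tree of 7 PEs, each holding one integer of "working memory".
leaves = [PE([n]) for n in range(4, 8)]
mid = [PE([2], leaves[0], leaves[1]), PE([3], leaves[2], leaves[3])]
root = PE([1], mid[0], mid[1])

evens = root.broadcast_and_resolve(lambda x: x % 2 == 0)
```

On the real machine the per-PE match runs simultaneously, so the whole query costs roughly the tree depth, O(log n) for n PEs, rather than the O(n) this sequential simulation takes.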
On Inflation with Non-minimal Coupling
A simple realization of inflation consists of adding the following operators to the Einstein-Hilbert action: (partial phi)^2, lambda phi^4, and xi phi^2 R, with xi a large non-minimal coupling. Recently there has been much discussion as to whether such theories make sense quantum mechanically and whether the inflaton phi can also be the Standard Model Higgs. In this note we answer these questions. Firstly, for a single scalar phi, we show that the quantum field theory is well behaved in the pure gravity and kinetic sectors, since the quantum-generated corrections are small. However, the theory likely breaks down at ~ m_pl / xi due to scattering provided by the self-interacting potential lambda phi^4. Secondly, we show that the theory changes for multiple scalars phi with non-minimal coupling xi phi · phi R, since this introduces qualitatively new interactions which manifestly generate large quantum corrections even in the gravity and kinetic sectors, spoiling the theory for energies > m_pl / xi. Since the Higgs doublet of the Standard Model includes the Higgs boson and 3 Goldstone bosons, it falls into the latter category and its validity is therefore manifestly spoiled. We show that these conclusions hold in both the Jordan and Einstein frames, and describe an intuitive analogy in the form of the pion Lagrangian. We also examine the recent claim that curvature-squared inflation models fail quantum mechanically. Our work appears to go beyond the recent discussions.
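For concreteness, the operators listed in the abstract assemble into a Jordan-frame action of roughly the following form; the 1/2 and 1/4 normalizations and the overall sign conventions are an assumption of this sketch, not stated in the abstract:

```latex
S = \int d^4x \,\sqrt{-g}\left[\frac{m_{\rm pl}^2}{2}\,R
    + \frac{\xi}{2}\,\phi^2 R
    - \frac{1}{2}\,(\partial\phi)^2
    - \frac{\lambda}{4}\,\phi^4\right],
```

with the single-field cutoff quoted in the abstract appearing as \(\Lambda \sim m_{\rm pl}/\xi\), and the multi-field coupling generalizing \(\xi\phi^2 R\) to \(\xi\,\vec{\phi}\cdot\vec{\phi}\,R\).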
Prevalence and characteristics of game transfer phenomena: a descriptive survey study
Previous qualitative studies suggest that gamers experience Game Transfer Phenomena (GTP): a variety of non-volitional phenomena related to playing videogames, including thoughts, urges, images, and sounds experienced when not playing. To investigate (i) which types of GTP were most common and (ii) their general characteristics, the present study surveyed a total of 2,362 gamers via an online survey. The majority of the participants were male, students, aged between 18 and 27 years, and 'hard-core' gamers. Most participants (96.6%) reported having experienced at least one type of GTP at some point, the majority having experienced GTP more than once, with many reporting 6 to 10 different types of GTP. Results demonstrated that videogame players experienced (i) altered visual perceptions, (ii) altered auditory perceptions, (iii) altered body perceptions, (iv) automated mental processes, and (v) behaviors. In most cases, GTP could not be explained by being under the influence of a psychoactive substance. The GTP experiences were usually short-lived, tended to occur after videogame playing rather than during play, occurred recurrently, and usually occurred while doing day-to-day activities. One in five gamers had experienced some type of distress or dysfunction due to GTP. Many experienced GTP as pleasant, and some wanted GTP to happen again.
Mechanism of injury and special considerations as predictive of serious injury: A systematic review.
Objectives: The Centers for Disease Control and Prevention's field triage guidelines (FTG) are routinely used by emergency medical services personnel for triaging injured patients. The most recent (2011) FTG contains physiologic, anatomic, mechanism, and special consideration steps. Our objective was to systematically review the criteria in the mechanism and special consideration steps that might be predictive of serious injury or need for a trauma center. Methods: We conducted a systematic review of the predictive utility of mechanism and special consideration criteria for predicting serious injury. A research librarian searched Ovid MEDLINE, EMBASE, and the Cochrane databases for studies published between January 2011 and February 2021. Eligible studies were identified using a priori inclusion and exclusion criteria. Studies were excluded if they lacked an outcome for serious injury, such as measures of resource use, injury severity scores, mortality, or composite measures using a combination of outcomes. Given the heterogeneity in populations, measures, and outcomes, results were synthesized qualitatively, focusing on positive likelihood ratios (LR+) whenever these could be calculated from the presented data, or on adjusted odds ratios (aOR).
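The positive likelihood ratio used in that synthesis is a standard 2x2-table statistic, LR+ = sensitivity / (1 - specificity). The counts below are made up purely to show the arithmetic; they are not from any study in the review.

```python
# LR+ from a 2x2 table of a triage criterion vs. serious injury.
# LR+ = sensitivity / (1 - specificity). Counts are invented for
# illustration and are not taken from the review.

def positive_likelihood_ratio(tp, fn, fp, tn):
    """Positive likelihood ratio of a binary criterion."""
    sensitivity = tp / (tp + fn)       # P(criterion+ | seriously injured)
    specificity = tn / (fp + tn)       # P(criterion- | not seriously injured)
    return sensitivity / (1 - specificity)

# A hypothetical mechanism criterion that flags 80 of 100 seriously
# injured patients and 200 of 1000 non-serious ones:
lr_plus = positive_likelihood_ratio(tp=80, fn=20, fp=200, tn=800)  # 0.80 / 0.20 = 4.0
```

A criterion with LR+ around 4 makes serious injury several times more likely when present, which is the kind of threshold such reviews use to judge whether a mechanism criterion is clinically useful.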
HERALD (Health Economics using Routine Anonymised Linked Data)
<b>Background</b>
Health economic analysis traditionally relies on patient-derived questionnaire data, routine datasets, and outcomes data from experimental randomised controlled trials and other clinical studies, which are generally used as stand-alone datasets. Herein, we outline the potential implications of linking these datasets to give one single joined-up data resource for health economic analysis.
<b>Method</b>
The linkage of individual-level data from questionnaires with routinely captured health care data allows the entire patient journey to be mapped both retrospectively and prospectively. We illustrate this with examples from an Ankylosing Spondylitis (AS) cohort by linking a patient-reported study dataset with the routinely collected general practitioner (GP) data, inpatient (IP) and outpatient (OP) datasets, and Accident and Emergency department data in Wales. The linked data system allows: (1) retrospective and prospective tracking of patient pathways through multiple healthcare facilities; (2) validation and clarification of patient-reported recall data, complementing the questionnaire/routine data information; (3) obtaining an objective measure of the costs of chronic conditions over a longer time horizon, including the pre-diagnosis period; (4) assessment of health service usage, referral histories, prescribed drugs, and co-morbidities; and (5) profiling and stratification of patients relating to disease manifestation, lifestyles, co-morbidities, and associated costs.
<b>Results</b>
Using the GP data system we tracked about 183 AS patients retrospectively and prospectively from the date of questionnaire completion to gather the following information: (a) the number of GP events; (b) the presence of GP 'drug' Read codes; and (c) the presence of GP 'diagnostic' Read codes. We tracked 236 and 296 AS patients through the OP and IP data systems, respectively, to count the number of OP visits and the number and duration of IP admissions. The results are presented under several patient stratification schemes based on disease severity, function, age, sex, and the onset of disease symptoms.
<b>Conclusion</b>
The linked data system offers unique opportunities for enhanced longitudinal health economic analysis that are not possible through the use of traditional isolated datasets. Additionally, this data linkage provides important information to improve diagnostic and referral pathways, and thus helps maximise clinical efficiency and the efficient use of resources.
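The core mechanics of the linkage, joining questionnaire records to routine events through an anonymised patient identifier and splitting each patient's events into pre- and post-questionnaire periods, can be sketched as follows. All field names and records are invented for illustration; real linkage involves careful anonymisation and matching well beyond this.

```python
# Minimal sketch of questionnaire-to-routine-data linkage through an
# anonymised patient identifier, with events split into retrospective
# (before questionnaire completion) and prospective periods.
# Field names and records are invented for illustration only.
from collections import defaultdict

questionnaire = [
    {"id": "p1", "completed": "2012-03-01", "disease_severity": "high"},
    {"id": "p2", "completed": "2012-04-15", "disease_severity": "low"},
]
gp_events = [
    {"id": "p1", "date": "2011-12-01", "read_code": "N100."},
    {"id": "p1", "date": "2012-05-20", "read_code": "bx1.."},
    {"id": "p2", "date": "2012-06-02", "read_code": "N100."},
]

def link(questionnaire, events):
    """Attach each patient's events, split around the questionnaire date."""
    by_id = defaultdict(list)
    for e in events:
        by_id[e["id"]].append(e)
    linked = []
    for q in questionnaire:
        evs = by_id[q["id"]]
        linked.append({
            **q,  # ISO dates compare correctly as strings
            "retrospective": [e for e in evs if e["date"] < q["completed"]],
            "prospective":  [e for e in evs if e["date"] >= q["completed"]],
        })
    return linked
```

The same join, repeated against the OP, IP, and A&E datasets, is what lets the system count visits, admissions, and Read-code occurrences per patient before and after study entry.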